
    Self-injective algebras under derived equivalences

    The Nakayama permutations of two derived equivalent, self-injective Artin algebras are conjugate. A different but elementary approach is given to show that the weak symmetry and self-injectivity of finite-dimensional algebras over an arbitrary field are preserved under derived equivalences. Comment: 11 pages
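
    The conjugacy claim can be restated schematically; this is only a sketch under assumed notation (\nu_A for the Nakayama permutation of A, D^b for the bounded derived category), not the paper's own formulation:

    % Conjugacy of Nakayama permutations under a derived equivalence (notation assumed).
    \[
      D^b(\mathrm{mod}\,A) \simeq D^b(\mathrm{mod}\,B)
      \quad\Longrightarrow\quad
      \nu_B = \sigma \,\nu_A\, \sigma^{-1}
    \]
    % where \sigma is a bijection between the simple A-modules and the simple
    % B-modules (up to isomorphism) induced by the derived equivalence.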

    Probing the topcolor-assisted technicolor model via single t-quark production at hadron colliders

    In this paper, we systematically study the contribution of the TC2 model to single t-quark production at hadron colliders, especially at the LHC. The TC2 model can contribute to the cross section of single t-quark production in two different ways. First, the existence of the top-pions and the top-Higgs can modify the $Wtb$ coupling via their loop contributions, and such modifications induce corrections to the cross sections of all three production modes. Our study shows that this kind of correction is negative and very small in all cases, so it would be difficult to observe even at the LHC. On the other hand, the TC2 model also contains tree-level flavor-changing (FC) couplings, which contribute to the cross sections of the $tq$ and $t\bar{b}$ production processes; the resonant effect can greatly enhance both cross sections. The first evidence of single t-quark production has been reported by the D0 collaboration, and the measured cross section $\sigma(p\bar{p}\to tb+X,\,tqb+X)$ is compatible with the standard model prediction at the 10% level. Because a light top-pion makes a large contribution to $t\bar{b}$ production, the top-pion mass must be very large for the cross section predicted by the TC2 model to be consistent with the Tevatron measurement. More detailed information about the top-pion mass and the FC couplings of the TC2 model should be obtained once the LHC is running. Comment: 30 pages, 3 tables, 10 figures
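
    The resonant enhancement mentioned above can be illustrated schematically; the Breit-Wigner form below is a generic assumption for an s-channel top-pion exchange, not a formula quoted from the paper (m_{\pi_t} and \Gamma_{\pi_t} denote the assumed top-pion mass and width):

    % Generic s-channel Breit-Wigner factor, assumed for illustration only.
    \[
      \hat{\sigma}\big(q\bar{q}' \to \pi_t^{+} \to t\bar{b}\big)
      \;\propto\;
      \frac{1}{\big(\hat{s}-m_{\pi_t}^{2}\big)^{2} + m_{\pi_t}^{2}\,\Gamma_{\pi_t}^{2}}
    \]
    % The partonic cross section peaks when \hat{s} \approx m_{\pi_t}^{2}, which is
    % why a light top-pion would strongly enhance t\bar{b} production.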

    A Simple and Effective Baseline for Attentional Generative Adversarial Networks

    Synthesising high-quality images from text descriptions with a generative model is an innovative and challenging task. In recent years several such models have been proposed: AttnGAN, which uses an attention mechanism to guide GAN training; SD-GAN, which adopts a self-distillation technique to improve the performance of the generator and the quality of the generated images; and StackGAN++, which gradually improves the details and quality of the image by stacking multiple generators and discriminators. However, each of these GAN variants carries a certain amount of redundancy, which affects both generation performance and model complexity. Following the popular "simple and effective" idea, we (1) remove redundant structure and improve the backbone network of AttnGAN, and (2) integrate and reconstruct the multiple DAMSM losses. Our improvements significantly reduce the model size and improve training efficiency while keeping the model's performance unchanged, yielding our \textbf{SEAttnGAN}. Code is available at https://github.com/jmyissb/SEAttnGAN. Comment: 12 pages, 3 figures
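
    As an illustration of point (2), a minimal sketch of folding sentence-level and word-level DAMSM-style matching terms into one loss is shown below; the function names, weights, pooling, and tensor shapes are assumptions for illustration, not taken from the SEAttnGAN code.

    import torch
    import torch.nn.functional as F

    def combined_damsm_loss(img_global, img_regions, sent_emb, word_embs,
                            labels, w_sent=1.0, w_word=1.0, temperature=10.0):
        """Sketch of a combined DAMSM-style loss (assumed form): sentence-level
        and word-level image-text matching terms use the same contrastive
        cross-entropy and are summed with weights."""
        # Sentence-level term: batch similarity matrix between global image
        # features and sentence embeddings, scored in both directions.
        s = temperature * F.normalize(img_global, dim=-1) @ F.normalize(sent_emb, dim=-1).t()
        loss_sent = F.cross_entropy(s, labels) + F.cross_entropy(s.t(), labels)

        # Word-level term (simplified): mean-pool region and word features and
        # reuse the same contrastive score. The real DAMSM attends over regions
        # per word instead of mean pooling.
        w = temperature * F.normalize(img_regions.mean(dim=1), dim=-1) @ \
            F.normalize(word_embs.mean(dim=1), dim=-1).t()
        loss_word = F.cross_entropy(w, labels) + F.cross_entropy(w.t(), labels)

        return w_sent * loss_sent + w_word * loss_word

    # Example with dummy tensors: batch of 4, 17 regions, 12 words, 256-dim features.
    B, R, T, D = 4, 17, 12, 256
    loss = combined_damsm_loss(torch.randn(B, D), torch.randn(B, R, D),
                               torch.randn(B, D), torch.randn(B, T, D),
                               labels=torch.arange(B))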